An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems
Authors
Abstract
This article proposes a new algorithm for solving a class of composite convex-concave saddle-point problems. The new algorithm is a special instance of the hybrid proximal extragradient framework in which a variant of Nesterov's accelerated method is used to approximately solve the prox subproblems. One advantage of the new method is that it works for any constant choice of proximal stepsize. Moreover, a suitable choice of this stepsize yields a method with the best known (accelerated inner) iteration complexity for the aforementioned class of saddle-point problems. In contrast to the smoothing technique of [12], our accelerated method, owing to its proximal point nature, does not assume that the feasible set is bounded. Experimental results on three problem sets show that the new method significantly outperforms Nesterov's smoothing technique of [12] as well as a recently proposed accelerated primal-dual method in [5].
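To make the two-level structure described above concrete, the following is a minimal sketch (not the paper's exact method or error criterion) of an inexact proximal-point outer loop for a saddle-point problem with bilinear coupling, in which each prox subproblem is solved approximately by Nesterov's accelerated gradient method. The problem instance, constants, and stopping rules are illustrative assumptions; in particular, the dual variable is eliminated in closed form here, whereas the HPE framework handles general composite terms and a relative inexactness condition.

```python
import numpy as np

def accel_grad(grad, x0, L, mu, iters):
    """Nesterov's accelerated gradient for an L-smooth, mu-strongly convex
    function, using the constant momentum (sqrt(kappa)-1)/(sqrt(kappa)+1)."""
    x, v = x0.copy(), x0.copy()
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
    for _ in range(iters):
        x_new = v - grad(v) / L          # gradient step at the momentum point
        v = x_new + beta * (x_new - x)   # momentum extrapolation
        x = x_new
    return x

def inexact_prox_point_saddle(A, b, lam=1.0, outer=30, inner=50):
    """Inexact proximal-point sketch for the toy saddle-point problem
        min_x max_y 0.5||x - b||^2 + <Ax, y> - 0.5||y||^2.
    Maximizing over y gives y* = Ax, so each prox subproblem reduces to the
    smooth, strongly convex minimization
        min_x 0.5||x - b||^2 + 0.5||Ax||^2 + (1/(2*lam))||x - x_k||^2,
    which is solved approximately by the accelerated inner loop above."""
    n = A.shape[1]
    x = np.zeros(n)
    # Smoothness / strong-convexity constants of the prox subproblem.
    L = 1.0 + np.linalg.norm(A, 2) ** 2 + 1.0 / lam
    mu = 1.0 + 1.0 / lam
    for _ in range(outer):
        xk = x
        grad = lambda z, xk=xk: (z - b) + A.T @ (A @ z) + (z - xk) / lam
        x = accel_grad(grad, xk, L, mu, inner)
    return x, A @ x  # approximate primal and dual solutions
```

Note that the outer loop tolerates inexact subproblem solutions: with a fixed inner-iteration budget the iterates still converge, which mirrors the abstract's point that the method works for any constant proximal stepsize `lam`.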
Related articles
An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex-concave saddle-point problems
This paper describes an accelerated HPE-type method based on general Bregman distances for solving monotone saddle-point (SP) problems. The algorithm is a special instance of a non-Euclidean hybrid proximal extragradient framework introduced by Svaiter and Solodov [28] where the prox sub-inclusions are solved using an accelerated gradient method. It generalizes the accelerated HPE algorithm pre...
Accelerating Block-Decomposition First-Order Methods for Solving Composite Saddle-Point and Two-Player Nash Equilibrium Problems
This article considers the two-player composite Nash equilibrium (CNE) problem with a separable non-smooth part, which is known to include the composite saddle-point (CSP) problem as a special case. Due to its two-block structure, this problem can be solved by any algorithm belonging to the block-decomposition hybrid proximal-extragradient (BD-HPE) framework proposed in [13]. The framework cons...
Stochastic Variance Reduction Methods for Saddle-Point Problems
We consider convex-concave saddle-point problems where the objective functions may be split in many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems which are common in machine learning. While the algorithmic extension is straightforward, it come...
Mirror Prox Algorithm for Multi-Term Composite Minimization and Alternating Directions
In the paper, we develop a composite version of the Mirror Prox algorithm for solving convex-concave saddle point problems and monotone variational inequalities of special structure, allowing one to cover saddle point/variational analogies of what is usually called "composite minimization" (minimizing a sum of an easy-to-handle nonsmooth and a general-type smooth convex function "as if" there were no ...
Accelerated gradient sliding for structured convex optimization
Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the summation of two smooth convex functions with different Lipschitz constants. We show that the AGS method can skip the gradient...
Journal:
- SIAM Journal on Optimization
Volume: 26, Issue: -
Pages: -
Publication year: 2016